Modeling human mobility helps to understand how people access resources and physically contact each other in cities, and thus contributes to various applications such as urban planning, epidemic control, and location-based advertising. Next location prediction is a decisive task in individual human mobility modeling and is usually viewed as sequence modeling, solved with Markov or RNN-based methods. However, existing models pay little attention to the logic of individual travel decisions and the reproducibility of the collective behavior of a population. To this end, we propose a Causal and Spatial-constrained Long and Short-term Learner (CSLSL) for next location prediction. CSLSL utilizes a causal structure based on multi-task learning to explicitly model the "when $\rightarrow$ what $\rightarrow$ where", a.k.a. "time $\rightarrow$ activity $\rightarrow$ location", decision logic. We then propose a spatial-constrained loss function as an auxiliary task, to ensure the consistency between the predicted and actual spatial distributions of travelers' destinations. Moreover, CSLSL adopts modules named Long and Short-term Capturer (LSC) to learn transition regularities across different time spans. Extensive experiments on three real-world datasets show performance improvements of CSLSL over baselines and confirm the effectiveness of introducing the causality and consistency constraints. The implementation is available at https://github.com/urbanmobility/cslsl.
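The spatial-constrained auxiliary loss is only named in the abstract; as a hypothetical illustration, one natural choice is to penalize the great-circle (haversine) distance between predicted and actual destination coordinates. A minimal numpy sketch under that assumption (function names are illustrative, not the paper's):

```python
import numpy as np

def haversine_km(pred, true, radius_km=6371.0):
    """Great-circle distance (km) between predicted and actual
    (latitude, longitude) pairs given in degrees."""
    lat1, lon1 = np.radians(pred[..., 0]), np.radians(pred[..., 1])
    lat2, lon2 = np.radians(true[..., 0]), np.radians(true[..., 1])
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

def spatial_constraint_loss(pred_coords, true_coords):
    """Auxiliary loss: mean geographic error of predicted destinations,
    pushing predicted and actual spatial distributions to agree."""
    return haversine_km(pred_coords, true_coords).mean()
```

Such a term would be added to the main next-location classification loss with a weighting coefficient.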
At present, next location recommendation plays a vital role in location-based social network applications and services. Although many methods have been proposed to solve this problem, three important challenges have not been well addressed so far: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because full parallelism is not allowed; (2) personalized preferences generally are not considered reasonably; (3) existing methods rarely study systematically how to efficiently utilize various auxiliary information (e.g., user ID and timestamp) in trajectory data and the spatio-temporal relations among non-consecutive locations. To address the above challenges, we propose a novel method named SanMove, a self-attention network-based model, to predict the next location by capturing users' long- and short-term mobility patterns. Specifically, SanMove introduces a long-term preference learning module, which uses a self-attention module to capture users' long-term mobility patterns, representing their personalized location preferences. Meanwhile, SanMove uses a spatial-temporal non-invasive self-attention module (STNOVA) to exploit auxiliary information to learn short-term preferences. We evaluate SanMove on two real-world datasets and demonstrate that SanMove is not only faster than the state-of-the-art RNN-based prediction models but also outperforms the baselines for next location prediction.
Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts. Recent debiasing methods usually treated features learned by small models or big models at earlier steps as bias features and proposed to exclude the branch learning those bias features during inference. However, most of these methods fail to disentangle the ``good'' stance features and ``bad'' bias features in the text part. In this paper, we investigate how to mitigate dataset bias in stance detection. Motivated by causal effects, we leverage a novel counterfactual inference framework, which enables us to capture the dataset bias in the text part as the direct causal effect of the text on stances and reduce the dataset bias in the text part by subtracting the direct text effect from the total causal effect. We novelly model bias features as features that correlate with the stance labels but fail on intermediate stance reasoning subtasks and propose an adversarial bias learning module to model the bias more accurately. To verify whether our model could better model the interaction between texts and targets, we test our model on recently proposed test sets to evaluate the understanding of the task from various aspects. Experiments demonstrate that our proposed method (1) could better model the bias features, and (2) outperforms existing debiasing baselines on both the original dataset and most of the newly constructed test sets.
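The debiasing step described above can be illustrated at inference time: the direct text effect (a text-only branch) is subtracted from the total causal effect (the full text-plus-target model). A minimal sketch, assuming a scalar trade-off `alpha` and simple logit subtraction, which is not necessarily the paper's exact fusion:

```python
import numpy as np

def counterfactual_debias(logits_text_target, logits_text_only, alpha=1.0):
    """Total causal effect (full text+target branch) minus alpha times
    the direct text effect (bias branch). The residual favors predictions
    that genuinely depend on the target rather than on text shortcuts."""
    return logits_text_target - alpha * logits_text_only

# Toy example: the text-only branch is strongly biased toward class 0.
total = np.array([2.0, 1.8, 0.1])      # logits from the full model
text_only = np.array([1.9, 0.2, 0.1])  # logits from the bias branch
debiased = counterfactual_debias(total, text_only)
```

Here the biased full model would predict class 0, while the debiased score shifts the prediction to class 1, whose evidence is not explained away by the text alone.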
Causal Emotion Entailment aims to identify causal utterances that are responsible for the target utterance with a non-neutral emotion in conversations. Previous works are limited in thorough understanding of the conversational context and accurate reasoning of the emotion cause. To this end, we propose Knowledge-Bridged Causal Interaction Network (KBCIN) with commonsense knowledge (CSK) leveraged as three bridges. Specifically, we construct a conversational graph for each conversation and leverage the event-centered CSK as the semantics-level bridge (S-bridge) to capture the deep inter-utterance dependencies in the conversational context via the CSK-Enhanced Graph Attention module. Moreover, social-interaction CSK serves as emotion-level bridge (E-bridge) and action-level bridge (A-bridge) to connect candidate utterances with the target one, which provides explicit causal clues for the Emotional Interaction module and Actional Interaction module to reason the target emotion. Experimental results show that our model achieves better performance over most baseline models. Our source code is publicly available at https://github.com/circle-hit/KBCIN.
Stereo images, containing left and right view images with disparity, are utilized in solving low-vision tasks recently, e.g., rain removal and super-resolution. Stereo image restoration methods usually obtain better performance than monocular methods by learning the disparity between dual views either implicitly or explicitly. However, existing stereo rain removal methods still cannot make full use of the complementary information between two views, and we find it is because: 1) the rain streaks have more complex distributions in directions and densities, which severely damage the complementary information and pose greater challenges; 2) the disparity estimation is not accurate enough due to the imperfect fusion mechanism for the features between two views. To overcome such limitations, we propose a new \underline{Stereo} \underline{I}mage \underline{R}ain \underline{R}emoval method (StereoIRR) via sufficient interaction between two views, which incorporates: 1) a new Dual-view Mutual Attention (DMA) mechanism which generates mutual attention maps by taking left and right views as key information for each other to facilitate cross-view feature fusion; 2) a long-range and cross-view interaction, which is constructed with basic blocks and dual-view mutual attention, can alleviate the adverse effect of rain on complementary information to help the features of stereo images to get long-range and cross-view interaction and fusion. Notably, StereoIRR outperforms other related monocular and stereo image rain removal methods on several datasets. Our codes and datasets will be released.
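A dual-view mutual attention of the kind described, where each view takes the other as key information, can be sketched with plain scaled dot-product attention. The single-head form and residual connection below are simplifying assumptions for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(query_feats, key_value_feats):
    """Scaled dot-product attention: one view queries the other."""
    d = query_feats.shape[-1]
    scores = query_feats @ key_value_feats.T / np.sqrt(d)
    return softmax(scores) @ key_value_feats

def dual_view_mutual_attention(left, right):
    """Each view attends to the other; the residual keeps the original
    view features alongside the cross-view aggregated information."""
    left_out = left + cross_attention(left, right)
    right_out = right + cross_attention(right, left)
    return left_out, right_out
```

In a real network the queries, keys, and values would be learned projections of the two views' feature maps rather than the raw features used here.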
Deep learning technology has made great progress in multi-view 3D reconstruction tasks. At present, most mainstream solutions establish the mapping between views and shape of an object by assembling the networks of 2D encoder and 3D decoder as the basic structure while they adopt different approaches to obtain aggregation of features from several views. Among them, the methods using attention-based fusion perform better and more stable than the others, however, they still have an obvious shortcoming -- the strong independence of each view during predicting the weights for merging leads to a lack of adaption of the global state. In this paper, we propose a global-aware attention-based fusion approach that builds the correlation between each branch and the global to provide a comprehensive foundation for weights inference. In order to enhance the ability of the network, we introduce a novel loss function to supervise the shape overall and propose a dynamic two-stage training strategy that can effectively adapt to all reconstructors with attention-based fusion. Experiments on ShapeNet verify that our method outperforms existing SOTA methods while the amount of parameters is far less than the same type of algorithm, Pix2Vox++. Furthermore, we propose a view-reduction method based on maximizing diversity and discuss the cost-performance tradeoff of our model to achieve a better performance when facing heavy input amount and limited computational cost.
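The global-aware weighting idea, conditioning each branch's merging weight on the global state rather than on the branch alone, can be sketched as follows. The mean-pooled global state and the linear scoring vector `w` are assumptions for illustration (in practice both the pooling and the scorer would be learned):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def global_aware_fusion(view_feats, w):
    """view_feats: (V, D), one feature vector per view branch.
    w: (2*D,) hypothetical scoring vector.
    Each view's merging weight is inferred from its own feature
    concatenated with the global (mean over views) state, so every
    branch sees the global context before weights are predicted."""
    global_state = view_feats.mean(axis=0)                        # (D,)
    ctx = np.concatenate(
        [view_feats, np.broadcast_to(global_state, view_feats.shape)], axis=1)
    weights = softmax(ctx @ w)                                    # (V,)
    fused = weights @ view_feats                                  # (D,)
    return fused, weights
```

This contrasts with purely per-branch scoring, where each view's weight is computed from that view's feature alone and cannot adapt to the overall state of the input set.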
Vision Transformers (ViTs) have shown promising performance compared with Convolutional Neural Networks (CNNs), but ViTs are much harder to train than CNNs. In this paper, we define several metrics, including Dynamic Data Proportion (DDP) and Knowledge Assimilation Rate (KAR), to investigate the training process, and divide it into three periods: formation, growth, and exploration. In particular, at the last stage of training, we observe that only a tiny portion of training examples is used to optimize the model. Given the data-hungry nature of ViTs, we ask a simple but important question: is it possible to provide abundant "effective" training examples at every stage of training? To address this issue, we need to answer two critical questions, i.e., how to measure the "effectiveness" of individual training examples, and how to systematically generate a sufficient number of "effective" examples. To answer the first question, we find that the "difficulty" of training samples can serve as an indicator of their "effectiveness". To address the second question, we propose to dynamically adjust the "difficulty" distribution of the training data across these evolution stages. To achieve these two purposes, we propose a novel data-centric ViT training framework that dynamically measures the "difficulty" of training samples and generates "effective" samples for the model at different training stages. Furthermore, to further enlarge the number of "effective" samples and alleviate the overfitting problem in the late training stage of ViTs, we propose a patch-level erasing strategy called PatchErasing. Extensive experiments demonstrate the effectiveness of the proposed data-centric ViT training framework and techniques.
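A patch-level erasing augmentation of the kind PatchErasing names can be sketched as below; the patch grid, zero fill value, and erase fraction are illustrative assumptions, not the paper's recipe (which may, for instance, schedule the erase fraction across the training stages):

```python
import numpy as np

def patch_erasing(image, patch_size=4, erase_frac=0.25, rng=None):
    """Zero out a random subset of non-overlapping patches of an
    (H, W, C) image, increasing sample 'difficulty' for augmentation."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = image.shape
    out = image.copy()
    ph, pw = h // patch_size, w // patch_size
    n_erase = int(erase_frac * ph * pw)
    for i in rng.choice(ph * pw, size=n_erase, replace=False):
        r, c = divmod(i, pw)
        out[r * patch_size:(r + 1) * patch_size,
            c * patch_size:(c + 1) * patch_size] = 0
    return out
```

Erasing at patch granularity lines up naturally with a ViT's tokenization, since each erased region corresponds to (part of) an input token.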
Recent works on multimodal emotion recognition have moved toward end-to-end models, which can extract task-specific features supervised by the target task, in contrast with two-phase pipelines. However, previous methods only model the feature interactions between the textual and the acoustic or visual modalities, ignoring the feature interactions between the acoustic and visual modalities. In this paper, we propose the Multimodal End-to-End Transformer (ME2ET), which can effectively model the tri-modal feature interactions among the textual, acoustic, and visual modalities at both the low level and the high level. At the low level, we propose progressive tri-modal attention, which models the tri-modal feature interactions by adopting a two-pass strategy and further leverages such interactions to significantly reduce computation and memory complexity by shortening the input token length. At the high level, we introduce a tri-modal feature fusion layer to explicitly aggregate the semantic representations of the three modalities. Experimental results on the CMU-MOSEI and IEMOCAP datasets show that ME2ET achieves state-of-the-art performance. Further in-depth analysis demonstrates the effectiveness, efficiency, and interpretability of the proposed progressive tri-modal attention, which helps our model achieve better performance while significantly reducing computation and memory cost. Our code will be publicly available.
Since aspect-level sentiment labels are expensive and labor-intensive to acquire, zero-shot aspect-level sentiment classification has been proposed to learn classifiers applicable to new domains without using any annotated aspect-level data. In contrast, document-level sentiment data with ratings are more easily accessible. In this work, we achieve zero-shot aspect-level sentiment classification using only document-level reviews. Our key intuition is that the sentiment representation of a document is composed of the sentiment representations of all of that document's aspects. Based on this, we propose the AF-DSC method to explicitly model such sentiment composition in reviews. AF-DSC first learns sentiment representations for all potential aspects and then aggregates the aspect-level sentiments into a document-level one to perform document-level sentiment classification. In this way, we obtain the aspect-level classifier as a by-product of the document-level classifier. Experimental results on aspect-level sentiment classification benchmarks demonstrate the effectiveness of explicitly utilizing sentiment composition in document-level sentiment classification. Our model with only 30k training examples outperforms previous work utilizing millions of examples.
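The composition intuition, that document sentiment is an aggregate of aspect sentiments, can be made concrete with a toy sketch. The mean aggregation below is an illustrative stand-in for whatever learned composition AF-DSC uses; the point is only that a document-level classifier trained on such an aggregate yields aspect-level scores for free:

```python
import numpy as np

def document_from_aspects(aspect_logits):
    """Compose document-level sentiment logits from per-aspect sentiment
    logits (here by simple averaging). Each row of aspect_logits is one
    aspect's (positive, negative) score."""
    return aspect_logits.mean(axis=0)

aspect_logits = np.array([[2.0, 0.1],   # aspect 1: positive
                          [1.5, 0.3],   # aspect 2: positive
                          [0.2, 0.4]])  # aspect 3: slightly negative
doc_logits = document_from_aspects(aspect_logits)
```

Only `doc_logits` needs supervision (the review rating), yet the rows of `aspect_logits` already behave as an aspect-level classifier: aspect 3 above is classified negative even though the document as a whole is positive.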
Minimal solutions for relative rotation and translation estimation tasks have been explored in different scenarios, typically relying on the so-called co-visibility graph. However, how to build direct rotation relationships between two frames without overlap remains an open topic, which, if solved, could greatly improve the accuracy of visual odometry. In this paper, a new minimal solution is proposed for relative rotation estimation between two images without overlapping areas, by exploiting a new graph structure we call the Extensibility Graph (E-Graph). Unlike a co-visibility graph, high-level landmarks, including vanishing directions and plane normals, are stored in our E-Graph, and these are geometrically extensible. Based on the E-Graph, the rotation estimation problem becomes simpler and more elegant: it can handle pure rotational motion and requires fewer assumptions, e.g., Manhattan/Atlanta World or planar/vertical motion. Finally, we embed our rotation estimation strategy into a complete camera tracking and mapping system, which obtains 6-DoF camera poses and a dense 3D mesh model. Extensive experiments on public benchmarks demonstrate that the proposed method achieves state-of-the-art tracking performance.
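Once geometrically extensible directions (vanishing directions, plane normals) are matched between two non-overlapping frames, the relative rotation follows from a classical least-squares alignment of direction vectors. The Kabsch/SVD solver below is a standard illustration of that step, not necessarily the paper's minimal solver:

```python
import numpy as np

def rotation_from_directions(dirs_a, dirs_b):
    """Least-squares rotation R such that dirs_b ≈ dirs_a rotated by R,
    from matched unit direction vectors stored as rows (e.g., vanishing
    directions and plane normals). Needs >= 2 non-parallel pairs."""
    H = dirs_a.T @ dirs_b                 # (3, 3) correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])            # guard against reflections
    return Vt.T @ D @ U.T
```

Because only directions (not point correspondences) are required, such an alignment remains well-posed even when the two frames share no co-visible features.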